COLLABORATIVE LEARNING METHOD FOR AN ARTIFICIAL NEURAL NETWORK WITHOUT DISCLOSURE OF TRAINING DATA
Abstract:
The present invention relates to a method of federated learning of an artificial neural network model on a plurality of training data sets. The training method involves a plurality of data providers (120), each holding a distinct set of training data, as well as an aggregation platform (110) which aggregates, at each iteration, partial models trained on a sub-plurality of these sets. At each iteration, the platform selects a sub-plurality of data providers and provides them with the parameters of the model, in encrypted form in the homomorphic domain. Each selected data provider decrypts these parameters, trains the model on its own data set, and returns the parameters of the resulting partial model, in encrypted form, to the platform. The aggregation platform then performs, in the homomorphic domain, a combination of the partial models obtained from the different data providers, to obtain a global model. Figure for the abstract: Fig. 1

Publication number: FR3097353A1
Application number: FR1906241
Filing date: 2019-06-12
Publication date: 2020-12-18
Inventors: Renaud Sirdey; Sergiu Carpov
Applicant: Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA)
Description:
[0001] The present invention relates to the general field of artificial intelligence, and more particularly to the collaborative learning of an artificial neural network. It also concerns the field of confidentiality-preserving computation.

[0002] State of the prior art

[0003] An artificial neural network is usually trained on data from a single data provider. The computer platform in charge of the training is sometimes the data provider itself. More often, however, the platform accesses the data of an external provider through a client/server architecture, the server managing the training database and the client handling the training of the model.

[0004] More recently, a collaborative learning method has been proposed to train an artificial neural network on data stored in a distributed manner, for example data held in mobile user terminals. In such a collaborative context, several distinct data providers each have a distinct set of training data and agree to collaborate, via an aggregation platform, in order to obtain a model trained on the union of these sets.

[0005] A description of such a collaborative learning method, called federated learning, can be found in the article by H.B. McMahan et al. entitled "Communication-efficient learning of deep networks from decentralized data", published in Proc. of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 54, 2017.

[0006] This federated learning method uses a plurality of data providers as well as an aggregation platform. Its operation can be summarized as follows:

[0007] The aggregation platform starts from an initial model common to all data providers. This model is updated iteratively; the model of the current iteration is called the current model. The current model is defined by its parameters, namely the synaptic weights between neurons of adjacent layers of the network.

[0008] At each iteration, the aggregation platform selects all or some of the data providers and sends them the parameters of the current model.

[0009] Each data provider thus selected then performs a predetermined number of learning iterations, starting from the current model, on its own data set. The result is a locally updated model.

[0010] The parameters updated locally by each provider are then transmitted to the aggregation platform, which combines them to produce an update of the current model.

[0011] The parameters of the current model are then transmitted to the data providers for the next iteration.

[0012] The process repeats until a stopping criterion is met. The final version of the model, in other words the model parameters at the end of training, is transmitted by the aggregation platform to the various data providers.

[0013] Each data provider thus contributes to the development of a global model by training it on its own data set. It is important to note that the data of the different providers is never transmitted to the platform; only the updated parameters are.
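The iterative scheme of paragraphs [0006] to [0012] corresponds to the federated averaging algorithm of McMahan et al. The following minimal sketch illustrates it in Python with NumPy, in the clear (no encryption yet); the helper local_sgd, the least-squares loss and all variable names are illustrative choices of this example, not elements of the patent.

```python
import numpy as np

def local_sgd(w, dataset, epochs=1, lr=0.1):
    """Placeholder for the provider-side step: a few local SGD
    iterations starting from the current model w (see [0009])."""
    X, y = dataset
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of a least-squares loss
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
# Three providers, each with its own (X, y) training set ([0004]).
providers = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]

w = np.zeros(4)                              # initial common model ([0007])
for t in range(20):                          # global iterations ([0008]-[0011])
    selected = providers                     # here: all providers are selected
    partials = [local_sgd(w, d) for d in selected]
    sizes = [len(d[1]) for d in selected]
    # weighted average of the partial models ([0010])
    w = sum(n * wk for n, wk in zip(sizes, partials)) / sum(sizes)
```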
[0014] Federated learning has interesting properties. First, it guarantees a certain level of confidentiality between data providers, since they cannot access each other's training data sets. Second, the volume of data exchanged between the aggregation platform and the various providers is relatively modest compared to the size of the training data sets (it is more economical to transmit the model parameters than the training data themselves). Finally, most of the computation, namely that induced by the execution of the learning algorithm, is offloaded to the data providers, the platform merely performing a simple combination of the partial models.

[0015] However, this federated learning method also has drawbacks. The current model, and in particular the final version of the model at the end of the learning process, is known both to the platform and to the various data providers. In certain cases, it is desirable to guarantee that the parameters of the model remain confidential vis-à-vis the platform itself. In addition, knowledge of the parameters of the locally trained partial models can allow the platform, by model inversion techniques, to trace back to the training data.

[0016] For example, if several cybersecurity companies, each with a database of computer attack signatures, decide to collaborate to develop a more efficient neural network classification model, they do not necessarily want to share information on attacks that have taken place on the networks of their client companies, let alone disclose to an aggregation platform the final model, which would betray their detection capacity.

[0017] An object of the present invention is therefore to provide a method of federated learning of an artificial neural network which does not have the aforementioned drawbacks.

[0018] Presentation of the invention

[0019] The present invention is defined by a method of federated learning of an artificial neural network model, the model being characterized by parameters giving the synaptic coefficients of said network, the learning method involving a plurality of data providers (120), each having a distinct training data set, as well as an aggregation platform (110), said method beginning with an initialization step to initialize the parameters of the model, and continuing with a plurality of successive iterations, each iteration updating the parameters of a current model of the neural network. Said method is specific in that:
- at the initialization step, the data providers share the private key sk and the public key pk of a homomorphic cryptosystem;
and in that, at each iteration,
- the aggregation platform selects a sub-plurality of K data providers and transmits to the K data providers thus selected the parameters of the current model, encrypted in the homomorphic domain;
- each selected data provider decrypts the parameters thus encrypted by means of the private key of the homomorphic cryptosystem, trains the current model on its training data set to obtain a partial model, encrypts the parameters of the partial model thus obtained, and transmits them in encrypted form to the aggregation platform;
- the aggregation platform performs, in the homomorphic domain, a combination of the parameters of the partial models obtained by the various data providers, to obtain the parameters of a new current model, encrypted in the homomorphic domain;
and, when a stopping criterion is satisfied, each data provider decrypts, by means of the private key of the homomorphic cryptosystem, the parameters of the last current model to obtain a final model, trained on the union of said training data sets.
[0020] According to a first embodiment, at each iteration, each selected data provider encrypts the parameters of the partial model by means of the public key pk of the homomorphic cryptosystem and transmits the parameters of the partial model thus encrypted to the aggregation platform.

[0021] Typically, the aggregation platform performs an averaging, in the homomorphic domain, of the respective parameters of the partial models obtained by the various selected providers.

[0022] According to a second embodiment, at the initialization step, each data provider also generates a stream cipher secret key, sk', and transmits it to the aggregation platform encrypted by means of the public key of the homomorphic cryptosystem, i.e. Enc_pk(sk').

[0023] In this case, at each iteration, each selected data provider encrypts the parameters of the partial model using its secret key and transmits the parameters of the partial model thus encrypted to the aggregation platform.

[0024] At each iteration, the aggregation platform then performs a transciphering of the parameters of the partial models of the different selected data providers, to obtain these same parameters encrypted in the homomorphic domain.

[0025] Advantageously, the aggregation platform then performs an averaging, in the homomorphic domain, of the respective parameters of the partial models thus transciphered.

[0026] Advantageously, in the second embodiment, the secret keys of the different data providers are chosen to be identical, the common secret key having been shared between the different data providers by means of a multiparty key exchange protocol.

[0027] According to a first variant, the platform performs an averaging in parallel of the different parameters of the partial models, by means of batch processing on a homomorphic ciphertext representing a composite of the different parameters of the partial models.

[0028] According to a second variant, the platform performs an averaging of the parameters of the same rank of the partial models, by means of batch processing on a homomorphic ciphertext representing a composite of the parameters of this rank of the partial models of the different data providers.
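As an illustration of the key setup assumed by the claimed method, the following sketch uses the Paillier cryptosystem (an additively homomorphic scheme) via the python-paillier package `phe`. The patent does not mandate a particular cryptosystem, so this choice, and every name in the snippet, are assumptions made for demonstration only: the providers jointly hold (pk, sk), while the platform only ever receives pk.

```python
from phe import paillier

# Key generation: in the patent the private key is shared among the data
# providers (e.g. via a multiparty Diffie-Hellman exchange); here a single
# keypair stands in for that shared key.
pk, sk = paillier.generate_paillier_keypair(n_length=2048)

# The platform only ever sees ciphertexts under pk.
model_params = [0.12, -0.53, 0.07]                 # toy synaptic coefficients
enc_params = [pk.encrypt(p) for p in model_params] # sent to the providers

# A provider holding sk recovers the current model in the clear ([0019]).
decrypted = [sk.decrypt(c) for c in enc_params]
assert all(abs(a - b) < 1e-9 for a, b in zip(model_params, decrypted))
```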
[0029] Brief description of the figures

[0030] Other characteristics and advantages of the invention will appear on reading a preferred embodiment of the invention, described with reference to the attached figures, among which:

[0031] Fig. 1 schematically represents the exchanges between an aggregation platform and training data providers in a federated learning method of an artificial neural network, according to a first embodiment of the invention;

[0032] Fig. 2 schematically represents the exchanges between an aggregation platform and training data providers in a federated learning method of an artificial neural network, according to a second embodiment of the invention;

[0033] Fig. 3 represents a flowchart of a federated learning method of an artificial neural network, according to the first embodiment of the invention;

[0034] Fig. 4 represents a flowchart of a federated learning method of an artificial neural network, according to the second embodiment of the invention.

[0035] In the following, we consider a plurality of data providers, each having a set of training data, these providers wishing to federate their data sets in order to train an artificial neural network while preserving the confidentiality of their data with respect to the others. The overall training of the neural network is devolved to a computer platform separate from the different data providers, referred to below as the aggregation platform.

[0036] The learning of the neural network can be of the supervised or unsupervised type. In the first case, the training data comprises labeled examples, i.e. values of the predictor variables (the network input data) for which the corresponding values of the target variables are known. In the second case, the neural network does not presuppose any labeling of the training data; it merely groups the examples by category (clustering).

[0037] The artificial neural network is trained by the computer platform to perform a predetermined task. This task can be a classification, a prediction, or a simple dimension reduction of the input data space (the neural network then functioning as an autoencoder). The dimension reduction can itself be a preliminary to another task, for example classification by a second neural network in cascade with the first.

[0038] The aggregation platform starts from an initial model of the neural network. This model is defined by its parameters, more precisely by the synaptic coefficients between neurons of adjacent layers, as well as by its activation function.

[0039] The initial model is shared between all parties; in other words, it is common to the data providers and the aggregation platform. For example, it may result from an initial convention, such as a rough model already disclosed, or from a random selection of the various parameters in question. In the latter case, the parameters of the initial model are communicated by the aggregation platform to the various data providers.

[0040] This initial model is then subject to an iterative update involving the various data providers and the aggregation platform. The model updated during an iteration is called the current model.

[0041] At each iteration, the following operations are performed:

[0042] The aggregation platform first selects all or some of the data providers for partial updates. This selection may in particular result from a random draw of a predetermined number of data providers at each iteration.
This option may be preferred when the number of data providers is particularly high and/or their computing resources are limited, for example if they are mobile terminals or nodes of an IoT system. On the other hand, when the data providers are servers, it may be preferable to use them all.

[0043] The aggregation platform then transmits the parameters of the current model, encrypted using a homomorphic cipher, to the data providers thus selected. For example, the parameters of the current model can be encrypted using the public key, pk, of a homomorphic cryptosystem. It is assumed that the data providers all have the corresponding private key, sk. The term private key means here that the key belongs to the group of data providers. For example, this private key may have been shared between the data providers by means of a multiparty Diffie-Hellman key exchange protocol.

[0044] Each selected data provider decrypts the parameters of the current model using the shared private key, sk, and can thus reconstruct the current model in the clear. It then trains the neural network thus obtained on its own training data set by performing a (small) predetermined number of local iterations. At each local iteration, the parameters of the model are updated using a learning algorithm such as the stochastic gradient descent algorithm, known per se. At the end of this learning phase, each data provider has a partially updated model, hereinafter referred to as a partial model, in the sense that it has been trained only on its own corpus of training data.

[0045] Each data provider having carried out this update then encrypts the parameters of the partial model it has obtained in the homomorphic domain, by means of the public key pk, and transmits them to the aggregation platform.
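A possible rendering of the provider-side step of paragraphs [0044] and [0045], again with the Paillier scheme of the earlier sketch standing in for the unspecified homomorphic cryptosystem; the helper provider_step, the loss and all variable names are illustrative assumptions:

```python
import numpy as np
from phe import paillier

pk, sk = paillier.generate_paillier_keypair(n_length=2048)

def provider_step(enc_params, dataset, epochs=3, lr=0.05):
    """Decrypt the current model, train it locally, re-encrypt the result."""
    w = np.array([sk.decrypt(c) for c in enc_params])   # [0044]: decrypt with sk
    X, y = dataset
    for _ in range(epochs):                             # a few local SGD steps
        w -= lr * X.T @ (X @ w - y) / len(y)
    return [pk.encrypt(float(v)) for v in w]            # [0045]: encrypt with pk

rng = np.random.default_rng(1)
dataset = (rng.normal(size=(40, 3)), rng.normal(size=40))
enc_current = [pk.encrypt(0.0)] * 3                     # encrypted current model
enc_partial = provider_step(enc_current, dataset)       # returned to the platform
```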
[0046] According to a variant, each data provider encrypts the parameters of its partial model using a secret key, for example the symmetric key of a stream cipher, instead of encrypting them homomorphically. The data providers then transmit to the platform the parameters encrypted with their respective secret keys. In this case, the data providers will have transmitted to the aggregation platform, during the initialization phase, their respective secret keys encrypted in the homomorphic domain. Advantageously, a single secret key is chosen for all the data providers, this key having been shared, for instance, by means of a multiparty Diffie-Hellman key exchange protocol. The platform can then carry out a transciphering of the parameters encrypted by symmetric encryption to obtain these same parameters encrypted in the homomorphic domain. This variant has the advantage of being much less bandwidth-intensive (on the uplink) than the previous one, insofar as symmetric encryption produces ciphertexts of the same size as the plaintexts.

[0047] Regardless of the variant used for encrypting the parameters, the aggregation platform performs a combination, by means of a combination operator evaluated in the homomorphic domain, of the various partial models to obtain an aggregated model, equivalent to one that would have been trained on the union of the data sets of the selected providers. Different combination operators can be considered, as described below.

[0048] Owing to the combination being performed in the homomorphic domain, the parameters of the aggregated model are obtained by the platform in encrypted form. Thus, the aggregation platform has access neither to the partial models nor to the aggregated model resulting from the combination operation.

[0049] The aggregation platform can then launch the next iteration by performing a new selection of data providers and transmitting to them the parameters of the aggregated model, this model constituting the current model of the new iteration.

[0050] The update process continues until a stopping criterion is satisfied, for example after a predetermined number of iterations or a predetermined number of local iterations.

[0051] At the end of the process, the aggregation platform can transmit the parameters of the final model, encrypted in the homomorphic domain, to all the data providers. These then decrypt them using the shared private key sk to obtain the final model in the clear.

[0052] Alternatively, the platform can refrain from carrying out this transmission, in particular if it selects all the data providers at each iteration, each data provider then already having the aggregated model received at the last iteration and being able to decrypt it to obtain the final model in the clear.

[0053] In all cases, the data providers have, at the end of the learning process, a neural network that has been trained on the union of the data sets, without disclosure of the latter between providers or to the aggregation platform. In addition, the platform has access neither to the intermediate models nor to the final model: only the data providers and, more generally, the recipients of the final model holding the private key sk can access it.

[0054] It will be noted that, in the federated learning process described previously, the training data are never encrypted and/or transmitted in homomorphic form to the platform; only the parameters of the current model are. The parameters represent a quantity of data much smaller than that of the training data, a fortiori after encryption in the homomorphic domain. Thus, the learning process does not require very high transmission rates.

[0055] Finally, it should be noted that the part that consumes computation time, namely the updating of the partial models, is carried out in the clear domain by the data providers. These calculations are therefore relatively simple and performed in parallel, which leads to short iteration cycles. Only the combination operation, which is the simplest operation, is performed in the homomorphic domain. Moreover, this operation can itself be parallelized, as will be seen later.
[0056] Fig. 1 schematically represents the exchanges between the aggregation platform and the training data providers in a federated learning method according to a first embodiment of the invention.

[0057] The aggregation platform is represented at 110, and the data providers at 120. The aggregation platform is typically hosted on a computing server in the cloud. The data providers can also be servers, user terminals, even smartphones or personal computers, or, to a lesser extent, nodes of an IoT system.

[0058] The data providers, 120, hold the respective training data sets, S_1, …, S_N, and have, at the end of the federated learning, the clear parameters of the artificial neural network trained on the union S_1 ∪ … ∪ S_N.

[0059] Optionally, an additional recipient of the trained model, 150, distinct from the data providers, may also receive the parameters of the artificial neural network in the clear, despite not having participated in the federated learning. This recipient may also be the sponsor of this learning.

[0060] The data providers, 120, as well as, if applicable, the additional recipient, 150, share a common private key sk of a homomorphic cryptosystem. The manner in which the private key is shared between these entities is not part of the present invention. However, as indicated above, a multiparty Diffie-Hellman key exchange protocol may be used to share this key confidentially vis-à-vis third parties. Of course, the data providers and, where applicable, the additional recipient also have the corresponding public key pk. Finally, the aggregation platform has the public key pk but not the private key sk.

[0061] The exchanges between the different entities during an iteration are also represented in the figure.

[0062] Denoting synthetically by W_t the model parameters, i.e. the synaptic coefficients of the artificial neural network at the end of iteration t, the aggregation platform holds these parameters encrypted in the homomorphic domain, i.e. Enc_pk(W_t).

[0063] The platform selects a set of K data providers among the N providers, with K ≤ N, and transmits to each of them the encrypted parameters Enc_pk(W_t).
[0064] Each selected data provider decrypts the encrypted parameters in question using the private key, sk, trains the model over a few iterations on its own training data set S_k, and deduces the parameters of its partial model, W_t^k, for example by means of the stochastic gradient descent algorithm, in a manner known per se. Each selected data provider then returns to the platform the parameters of its partial model after having encrypted them in the homomorphic domain, i.e. Enc_pk(W_t^k).

[0065] The aggregation platform then performs the combination, in the homomorphic domain, of the partial models thus trained by the selected providers.

[0066] According to a variant, the combination can relate to all the partial models, including those of the non-selected providers. In other words, for the non-selected data providers, the partial model is simply that obtained at the last iteration in which they were selected.

[0067] In any case, the aggregation of the partial models can be achieved, for example, using a simple weighted average:

[0068] Enc_pk(W_{t+1}) = ⊕_{k=1..K} (n_k / n) ⊗ Enc_pk(W_t^k)    (1)

where ⊕ and ⊗ are respectively the operations of addition and external multiplication in the homomorphic domain, where n_k = |S_k| and n = n_1 + … + n_K, and where it has been assumed here, for simplicity of presentation, that the first K data providers were selected.

[0069] We recall that an additively homomorphic cipher satisfies the following properties:

[0070] Enc_pk(a) ⊕ Enc_pk(b) = Enc_pk(a + b)    (2-1)

[0071] c ⊗ Enc_pk(a) = Enc_pk(c · a)    (2-2)

[0072] The operator ⊗ makes it possible to multiply a ciphertext by a plaintext, the result being encrypted. In the present case, it makes it possible to carry out the multiplications by the terms n_k / n. To avoid floating-point operations (not realizable in the homomorphic domain), the terms n_k / n are in practice multiplied by a high-value constant before the multiplication.
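The weighted average of expression (1) uses only the operations (2-1) and (2-2), so any additively homomorphic scheme suffices. A sketch with Paillier (python-paillier), including the integer-scaling trick of paragraph [0072]; the scaling constant and all names are choices made for this example, not values from the patent:

```python
from phe import paillier

pk, sk = paillier.generate_paillier_keypair(n_length=2048)

# Partial models of K = 3 providers (one parameter each, for brevity),
# already encrypted under pk, with training set sizes n_k.
partials = [0.30, 0.10, 0.20]
sizes = [100, 300, 100]
enc_partials = [pk.encrypt(w) for w in partials]

SCALE = 10**6                     # high-value constant of [0072]
n = sum(sizes)
weights = [round(SCALE * nk / n) for nk in sizes]   # integer weights n_k/n

# Expression (1): sum over k of (n_k/n) (x) Enc(W^k), evaluated blindly.
acc = enc_partials[0] * weights[0]
for c, wgt in zip(enc_partials[1:], weights[1:]):
    acc = acc + c * wgt           # (2-1) and (2-2): the platform never decrypts

avg = sk.decrypt(acc) / SCALE     # a key holder removes the scaling afterwards
expected = sum(nk * w for nk, w in zip(sizes, partials)) / n
assert abs(avg - expected) < 1e-6
```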
[0073] Weighting by n_k / n within expression (1) gives more weight to data providers with large training data sets. This weighting assumes that the sizes n_k of the different training sets are known to the platform. They may, for example, have been communicated to it by the data providers during the initialization phase.

[0074] From expression (1) and properties (2-1) and (2-2), we deduce:

[0075] Dec_sk(Enc_pk(W_{t+1})) = Σ_{k=1..K} (n_k / n) W_t^k    (3)

[0076] It should be noted that combination operations more complex than that of (1) could be envisaged without departing from the scope of the present invention. In that case, FHE (Fully Homomorphic Encryption) or SHE (Somewhat Homomorphic Encryption) cryptosystems may be used.

[0077] Once the combination has been made, the aggregation platform has the parameters encrypted in the homomorphic domain, i.e. Enc_pk(W_{t+1}), and can proceed to a new selection of data providers.

[0078] At the end of the learning iterations, the platform can transmit the encrypted parameters Enc_pk(W_T), where T is the total number of iterations, to all the data providers and to the additional recipient, 150. These can then recover the parameters of the model in the clear using the private key, sk.

[0079] Fig. 2 schematically represents the exchanges between an aggregation platform and training data providers in a federated learning method of an artificial neural network, according to a second embodiment of the invention.

[0080] The aggregation platform is represented at 210, and the data providers at 220.

[0081] The exchanges differ from the previous embodiment in that the parameters transmitted by each data provider are encrypted not in the homomorphic domain but with a stream cipher, using a secret key sk'_k, i.e. Sym_{sk'_k}(W_t^k). The parameters thus encrypted are transciphered by the aggregation platform by means of the secret key encrypted in the homomorphic domain, Enc_pk(sk'_k), so as to provide these same parameters encrypted in the homomorphic domain. For example, the platform performs a second encryption, in the homomorphic domain, of the parameters encrypted with the secret key, and then decrypts, in the homomorphic domain, the doubly encrypted parameters by means of the encrypted key. The result of the transciphering is none other than Enc_pk(W_t^k). A detailed description of this transciphering step can be found in application FR-A-3060165 filed in the name of the present Applicant.

[0082] The combination of the partial models in the homomorphic domain is identical to that carried out by the platform in the first embodiment.
The result, Enc_pk(W_{t+1}), is then transmitted to the selected data providers.

[0083] The encrypted keys Enc_pk(sk'_k), k = 1, …, N, are stored on the aggregation platform. Advantageously, these encrypted keys being relatively voluminous, a single secret key sk' is chosen for the whole set of data providers, i.e. sk'_1 = … = sk'_N = sk', which makes it possible to store only one encrypted key, Enc_pk(sk'), on the platform.

[0084] Fig. 3 represents a flowchart of a federated learning method of an artificial neural network, according to the first embodiment of the invention.

[0085] The left part of the figure corresponds to the sequence of steps executed by the aggregation platform, while the right part corresponds to the sequence of steps executed by the data providers.

[0086] Steps 310 and 315 correspond respectively to the initialization of the platform and of the data providers.

[0087] At 310, the parameters of the artificial neural network model are initialized, by random draw or from a coarse model. The initial parameters of the model are denoted W_0. At 315, the data providers construct or exchange a common private key, sk, of an asymmetric homomorphic cryptosystem. The corresponding public key, pk, is transmitted to the aggregation platform.

[0088] At 320, the platform homomorphically encrypts the parameters of the model by means of the public key pk, i.e. Enc_pk(W_0).

[0089] We then enter an iterative loop, the current iteration being denoted by the index t and the parameters of the current model being denoted synthetically W_t.

[0090] At 330, the aggregation platform selects, randomly or not, a group of K data providers among the N providers.

[0091] At 340, the platform transmits to them the parameters of the current model, encrypted in the homomorphic domain, i.e. Enc_pk(W_t).

[0092] At 325, each data provider in the group decrypts said parameters using the key sk in order to obtain the parameters W_t in the clear.

[0093] At 335, each data provider of the group updates the parameters of the model by training it on its own training data set, S_k. This update can, for example, be carried out by means of the stochastic gradient descent (SGD) algorithm. To this end, the training set S_k can be divided into mini-batches and the parameters of the model are updated successively on the different mini-batches B by:

[0094] W ← W − η ∇F_B(W)    (4)

where F_B is the objective function on the mini-batch B and η is the learning rate.
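A compact illustration of the mini-batch update (4); the quadratic loss is an arbitrary stand-in for the unspecified objective function F_B, and all names are choices of this example:

```python
import numpy as np

def sgd_epoch(w, X, y, batch_size=16, lr=0.05):
    """One pass over the provider's training set S_k, applying update (4)
    on each successive mini-batch B."""
    for start in range(0, len(y), batch_size):
        Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        grad = Xb.T @ (Xb @ w - yb) / len(yb)    # gradient of F_B at w
        w = w - lr * grad                        # W <- W - eta * grad F_B(W)
    return w

rng = np.random.default_rng(2)
X, y = rng.normal(size=(64, 5)), rng.normal(size=64)
w = sgd_epoch(np.zeros(5), X, y)
```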
[0095] At the end of this update phase, at step 345, each data provider has the parameters of a partial model, W_t^k. It encrypts these parameters in the homomorphic domain, i.e. Enc_pk(W_t^k), and transmits them to the aggregation platform.

[0096] At 350, the platform combines, in the homomorphic domain, the parameters of the partial models of the different data providers. As indicated previously, this combination can relate only to the partial models of the data providers selected in step 330, or else to the partial models of all the data providers. Different combination operators can be considered, such as the simple weighted average of expression (1).

[0097] The result of the combination gives, at 360, the parameters of the model, encrypted in the homomorphic domain, for the new iteration, i.e. Enc_pk(W_{t+1}).

[0098] At 370, it is then tested whether a stopping criterion is satisfied, for example whether a predetermined number of iterations T has been reached. If not, the method returns to the selection step 330. If so, the aggregation platform broadcasts the parameters of the model obtained at the end of the last iteration, i.e. Enc_pk(W_T), to all the data providers (and, where applicable, to additional recipients, as seen in relation to Fig. 1). The latter decrypt, at 355, the parameters of the model using the common private key, sk, to obtain the parameters of the model in the clear, W_T.

[0099] The data providers and, if applicable, the additional recipient(s) then have an artificial neural network that has been trained in a federated manner on the union of the data providers' training sets.

[0100] Fig. 4 represents a flowchart of a federated learning method of an artificial neural network, according to the second embodiment of the invention.

[0101] Steps 410-470 are respectively identical or similar to steps 310-370 of Fig. 3. Only the differences are explained below.

[0102] At step 415, in addition to the keys of a homomorphic cryptosystem common to all the data providers, a secret key, sk', is generated, advantageously a symmetric key of a stream cipher. It is recalled that, according to a variant, this secret key is common to all the data providers or to some of them only. The secret key, encrypted in the homomorphic domain, i.e. Enc_pk(sk'), is transmitted to the aggregation platform together with the homomorphic public key pk.
[0103] At step 445, each data provider has the parameters of a partial model, W_t^k. Instead of encrypting the parameters of the partial model in the homomorphic domain as in the first embodiment, the data provider encrypts them by means of the symmetric cipher, i.e. Sym_{sk'}(W_t^k), and transmits them to the platform.

[0104] At step 447, the platform performs a transciphering of the parameters of the partial models of the selected data providers. In other words, these encrypted parameters are encrypted a second time, in the homomorphic domain, and then decrypted in the homomorphic domain using the encrypted key Enc_pk(sk'). More simply, the purpose of the transciphering operation here is to bring the parameters from a state where they are encrypted by symmetric encryption to a state where they are encrypted in the homomorphic domain, without passing through an intermediate state where they would be revealed in the clear. Once transciphered into the homomorphic domain, the partial models are combined at 450 in the same way as at step 350.
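The following toy sketch conveys the structure of step 447. To keep it short, the stream cipher is replaced by a simple additive mask over the integers (a genuine deployment would homomorphically evaluate the decryption of a real stream cipher, as in FR-A-3060165); the Paillier scheme again stands in for the homomorphic cryptosystem, and every name here is an assumption of this example:

```python
from phe import paillier

pk, sk = paillier.generate_paillier_keypair(n_length=2048)

# --- Initialization ([0102]): the provider's mask (toy stand-in for the
# stream-cipher keystream) is sent to the platform encrypted under pk.
keystream = 123456789
enc_keystream = pk.encrypt(keystream)

# --- Provider side (step 445): symmetric encryption = additive masking.
param = 42                                   # one integer-encoded parameter
sym_ct = param + keystream                   # Sym_sk'(W): cheap, compact uplink

# --- Platform side (step 447): transciphering, entirely in the encrypted
# domain. Encrypt the symmetric ciphertext under pk, then homomorphically
# subtract the encrypted keystream: Enc(param + k) - Enc(k) = Enc(param).
enc_param = pk.encrypt(sym_ct) - enc_keystream

# The platform never saw `param` in the clear; a key holder can check:
assert sk.decrypt(enc_param) == param
```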
[0105] The parameters of the partial models of the providers not selected during the current iteration can be available in encrypted form in the homomorphic domain, either because they have already been transciphered at a previous iteration, or because they were encrypted homomorphically by means of the public key pk, at step 440, during the first iteration.

[0106] At the end of the federated learning process, the platform transmits to the various data providers and, if necessary, to the additional recipient(s) holding the private key sk, the parameters of the model in homomorphically encrypted form.

[0107] Regardless of the embodiment, the aggregation platform may use an SHE or FHE cryptosystem allowing batch operations (batching). For example, a description of such an encryption method can be found in the article by J.S. Coron et al. entitled "Batch fully homomorphic encryption over the integers", published in Advances in Cryptology – EUROCRYPT 2013, Lecture Notes in Computer Science, vol. 7881, Springer, Berlin, Heidelberg.

[0108] The principle of batch processing is to multiplex several plaintexts into a composite plaintext so as to obtain a single ciphertext. Thus, instead of encrypting the plaintexts independently of each other, a composite plaintext constructed from the plaintexts in question is encrypted.

[0109] Batch processing makes it possible to parallelize the same operation on a plurality of ciphertexts in the homomorphic domain. More precisely, if we denote by a_1, …, a_P a plurality of first plaintexts and by a = (a_1 ∥ … ∥ a_P) a first composite plaintext built by batching from these first plaintexts, if we denote by b_1, …, b_P the same plurality of second plaintexts and by b = (b_1 ∥ … ∥ b_P) a second composite plaintext built by batching from these second plaintexts, and if we encrypt the first and second composites in question:

[0110] Enc_pk(a) = Enc_pk(a_1 ∥ … ∥ a_P)    (5-1)

[0111] Enc_pk(b) = Enc_pk(b_1 ∥ … ∥ b_P)    (5-2)

we can then perform an addition or multiplication operation in parallel on the ciphertexts in the homomorphic domain, by calculating from Enc_pk(a) and Enc_pk(b) ciphertexts such that:

[0112] Enc_pk(a) ⊕ Enc_pk(b) = Enc_pk(a_1 + b_1 ∥ … ∥ a_P + b_P)    (6-1)

[0113] Enc_pk(a) ⊗ Enc_pk(b) = Enc_pk(a_1 · b_1 ∥ … ∥ a_P · b_P)    (6-2)

[0114] In the same way, we can carry out in parallel a multiplication of ciphertexts by plaintext constants c = (c_1 ∥ … ∥ c_P), by calculating, from an encrypted composite Enc_pk(a), a ciphertext such that:

[0115] c ⊗ Enc_pk(a) = Enc_pk(c_1 · a_1 ∥ … ∥ c_P · a_P)    (7)

[0116] The aforementioned properties of batch processing make it possible to evaluate expression (1) in a small number of operations, in particular when the combination of the partial models is carried out by averaging.

[0117] Indeed, we recall that W_t^k synthetically represents the different parameters of a partial model; in other words, W_t^k is a vector whose components w_{t,1}^k, …, w_{t,P}^k are the P synaptic coefficients of the artificial neural network trained on the set S_k, at iteration t.

[0118] The data provider can therefore encrypt a composite plaintext whose constituent elements are the components of W_t^k to obtain the encrypted composite:

[0119] Enc_pk(w_{t,1}^k ∥ … ∥ w_{t,P}^k)    (8)

[0120] The platform can then use properties (6-1) and (7) to calculate:

[0121] ⊕_{k=1..K} (n_k / n) ⊗ Enc_pk(w_{t,1}^k ∥ … ∥ w_{t,P}^k) = Enc_pk(Σ_k (n_k / n) w_{t,1}^k ∥ … ∥ Σ_k (n_k / n) w_{t,P}^k)    (9)

in which each term is obtained by performing a single homomorphic operation on the composite instead of P parallel operations on the model parameters.

[0122] The parallelization of the calculations over the different parameters is particularly relevant for large models (high value of P), the complexity of the calculations then being proportional to the number K of users (data providers) and independent of the size of the model. It is clear, however, that for practical reasons it may be necessary to use several composites in parallel to process very large models (the number of slots in a batch being limited).
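Real batched schemes (BGV/BFV) require a lattice-based library; the following self-contained toy class merely mimics the slot semantics of (6-1) and (7) with plain Python lists, to show how expression (9) costs one operation per provider whatever the model size P. None of this is cryptographic; it is purely an illustration of the data layout:

```python
class ToySlots:
    """Stand-in for a batched ciphertext: a vector of P slots on which
    addition and scalar multiplication act slot-wise, as in (6-1) and (7)."""
    def __init__(self, slots):
        self.slots = list(slots)
    def __add__(self, other):                       # property (6-1)
        return ToySlots(a + b for a, b in zip(self.slots, other.slots))
    def __mul__(self, c):                           # property (7), c a plaintext
        return ToySlots(c * a for a in self.slots)

# K = 3 providers, P = 4 parameters each: one "ciphertext" per provider ([0118]).
partials = [ToySlots([1.0, 2.0, 3.0, 4.0]),
            ToySlots([2.0, 2.0, 2.0, 2.0]),
            ToySlots([3.0, 0.0, 1.0, 0.0])]
sizes = [100, 200, 100]
n = sum(sizes)

# Expression (9): one scaled addition per provider, regardless of P.
acc = partials[0] * (sizes[0] / n)
for ct, nk in zip(partials[1:], sizes[1:]):
    acc = acc + ct * (nk / n)

print(acc.slots)   # slot j holds sum over k of (n_k/n) * w_j^k
```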
[0123] Conversely, when the number K of users (data providers) is very large compared to the size P of the model, it is preferable to apply the batching across the different users. In other words, P composites are then created, the constituent elements of each composite being the parameters of the same index in the different partial models. More precisely, the composite associated with parameter j is given by (w_{t,j}^1 ∥ … ∥ w_{t,j}^K). We denote by C_j the corresponding ciphertext.

[0124] The combination of the partial models by averaging (expression (1)) can then be simplified as indicated below.

[0125] Certain homomorphic encryption methods, such as BGV, make it possible to compute an accumulation of the plaintexts constituting a composite plaintext, as described in the original article by Z. Brakerski et al. entitled "Fully homomorphic encryption without bootstrapping", published in Cryptology ePrint Archive, Report 2011/277. More precisely, from a ciphertext Enc_pk(a_1 ∥ … ∥ a_K) we can obtain a second ciphertext whose decryption gives:

[0126] (σ ∥ … ∥ σ) with σ = a_1 + … + a_K    (10)

[0127] In other words, after decryption, a second composite is obtained whose constituent elements are all equal to the sum of the plaintexts constituting the starting composite.

[0128] The calculation of expression (1) can then be performed using P averaging operations over the K data providers, each averaging operation relating to a single model parameter.

[0129] Using property (7), for each parameter j, we first determine from C_j a ciphertext whose decryption gives (n_1/n · w_{t,j}^1 ∥ … ∥ n_K/n · w_{t,j}^K), and we then calculate a second ciphertext, C'_j, using property (10). The result of the calculation according to expression (1) is then represented by the homomorphically encrypted values C'_j, j = 1, …, P. These encrypted values are transmitted to the selected data providers (and, at the last iteration, to all the data providers as well as to the additional recipient(s)). The decryption of each encrypted value C'_j by means of sk gives K identical averages, of which, of course, only one is extracted by the data provider.
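Extending the toy class above with the slot accumulation of property (10) (which a scheme such as BGV obtains through slot rotations) gives the second packing variant of paragraphs [0123] to [0129]; the class is redefined here so the snippet stands alone, and remains a plain illustration of the layout, not real cryptography:

```python
class ToySlots:
    """Toy batched 'ciphertext' with slot-wise ops and slot accumulation."""
    def __init__(self, slots):
        self.slots = list(slots)
    def __mul__(self, consts):                    # property (7): slot-wise product
        return ToySlots(a * c for a, c in zip(self.slots, consts))
    def accumulate(self):                         # property (10): all-slots sum
        s = sum(self.slots)
        return ToySlots([s] * len(self.slots))

# K = 4 providers, P = 2 parameters. One composite per parameter rank j,
# packing that parameter across the K providers ([0123]).
sizes = [10, 20, 30, 40]
n = sum(sizes)
w = [[0.1, 0.2, 0.3, 0.4],      # parameter j = 0 of providers 1..K
     [1.0, 1.0, 2.0, 2.0]]      # parameter j = 1 of providers 1..K
weights = [nk / n for nk in sizes]

for j in range(2):
    C_j = ToySlots(w[j]) * weights        # (n_k/n) * w_j^k in each slot
    C_j = C_j.accumulate()                # property (10): every slot = average
    print(C_j.slots[0])                   # any single slot carries the result
```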
Claims:
Claims (10)

[0001] A method of federated learning of an artificial neural network model, the model being characterized by parameters giving the synaptic coefficients of said network, the learning method involving a plurality of data providers, each having a distinct training data set, as well as an aggregation platform, said method beginning with an initialization step (310, 410) to initialize the parameters of the model, and continuing with a plurality of successive iterations, each iteration updating the parameters of a current model of the neural network, characterized in that:
- at the initialization step, the data providers share the private key and the public key of a homomorphic cryptosystem;
and in that, at each iteration,
- the aggregation platform selects (330, 430) a sub-plurality of data providers and transmits to the data providers thus selected the parameters of the current model, encrypted in the homomorphic domain;
- each selected data provider decrypts the parameters thus encrypted by means of the private key of the homomorphic cryptosystem (325, 425), trains the current model on its training data set (435, 445) to obtain a partial model, encrypts (345, 445) the parameters of the partial model thus obtained, and transmits them in encrypted form to the aggregation platform;
- the aggregation platform performs, in the homomorphic domain, a combination of the parameters of the partial models obtained by the various data providers, to obtain the parameters of a new current model, encrypted in the homomorphic domain;
and, when a stopping criterion is satisfied, each data provider decrypts, by means of the private key of the homomorphic cryptosystem, the parameters of the last current model to obtain a final model, trained on the union of said training data sets.
[0002] The method of federated learning of an artificial neural network model according to claim 1, characterized in that, at each iteration, each selected data provider encrypts the parameters of the partial model by means of the public key of the homomorphic cryptosystem and transmits the parameters of the partial model thus encrypted to the aggregation platform.

[0003] The method of federated learning of an artificial neural network model according to claim 1 or 2, characterized in that the aggregation platform performs an averaging (350), in the homomorphic domain, of the respective parameters of the partial models obtained by the various selected providers.

[0004] The method of federated learning of an artificial neural network model according to claim 1, characterized in that, at the initialization step, each data provider also generates (415) a secret stream cipher key and transmits it to the aggregation platform, encrypted by means of the public key of the homomorphic cryptosystem.

[0005] The method of federated learning of an artificial neural network model according to claim 4, characterized in that, at each iteration, each selected data provider encrypts (445) the parameters of the partial model by means of its secret key and transmits the parameters of the partial model thus encrypted to the aggregation platform.

[0006] The method of federated learning of an artificial neural network model according to claim 5, characterized in that, at each iteration, the aggregation platform performs a transciphering (447) of the parameters of the partial models of the various selected data providers, to obtain these same parameters encrypted in the homomorphic domain.

[0007] The method of federated learning of an artificial neural network model according to claim 6, characterized in that the aggregation platform performs an averaging (450), in the homomorphic domain, of the respective parameters of the partial models thus transciphered.

[0008] The method of federated learning of an artificial neural network model according to one of claims 4 to 7, characterized in that the secret keys of the various data providers are identical, the common secret key having been shared between the various data providers by means of a multiparty key exchange protocol.

[0009] The method of federated learning of an artificial neural network model according to claim 3 or 7, characterized in that the platform performs an averaging in parallel of the various parameters of the partial models, by means of batch processing on a homomorphic ciphertext representing a composite of the various parameters of the partial models.
[0010] The method of federated learning of an artificial neural network model according to claim 3 or 7, characterized in that the platform performs an averaging of the parameters of the same rank of the partial models, by means of batch processing on a homomorphic ciphertext representing a composite of the parameters of this rank of the partial models of the different data providers.
Cited documents:

- FR3060165A1 (filed 2016-12-09, published 2018-06-15), Commissariat à l'Énergie Atomique et aux Énergies Alternatives: "Secure classification method using a transciphering operation"
- CN109684855A (filed 2018-12-17, published 2019-04-26), University of Electronic Science and Technology of China: "A kind of combined depth learning training method based on secret protection technology"
- CN113553610B (filed 2021-09-22, published 2021-12-31), Harbin Institute of Technology: "Multi-party privacy protection machine learning method based on homomorphic encryption and trusted hardware"
- CN113660080B (filed 2021-10-20, published 2021-12-14), Beijing Jinhongrui Information Technology Co., Ltd.: "Safe multi-party calculation and federal analysis technology"